
    Privacy Preserving Multi-Server k-means Computation over Horizontally Partitioned Data

    k-means is one of the most popular clustering algorithms in data mining. Much recent research has addressed settings where the dataset is divided among multiple parties, or where the dataset is too large for the data owner to handle alone. In the latter case, servers are typically hired to perform the clustering: the owner distributes the dataset among the servers, who jointly compute the k-means and return the cluster labels to the owner. The major challenge in this method is to prevent the servers from gaining substantial information about the owner's actual data. Several algorithms designed in the past provide cryptographic solutions for privacy-preserving k-means. We provide a new method to perform k-means over a large dataset using multiple servers. Our technique avoids heavy cryptographic computation; instead we use a simple randomization technique to preserve the privacy of the data. The k-means computed has exactly the same efficiency and accuracy as the k-means computed over the original dataset without any randomization. We argue that our algorithm is secure against an honest-but-curious, passive adversary.

    Comment: 19 pages, 4 tables. International Conference on Information Systems Security. Springer, Cham, 201
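    The abstract does not spell out the randomization scheme, but one simple transformation with exactly the stated property (identical k-means output on masked and original data) is a random isometry, i.e. a random rotation plus translation, which preserves all pairwise distances. A minimal sketch of that idea, illustrative only and not the paper's actual construction:

```python
import numpy as np

def random_isometry(dim, rng):
    # Random orthogonal matrix via QR, plus a random translation:
    # both preserve all pairwise Euclidean distances exactly.
    q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
    t = rng.standard_normal(dim)
    return q, t

def kmeans_labels(points, k, init_idx, iters=50):
    # Plain Lloyd's algorithm with fixed initial centers; returns labels.
    centers = points[init_idx].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = points[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

rng = np.random.default_rng(0)
# Two well-separated synthetic clusters standing in for the owner's data.
data = np.vstack([rng.normal(0.0, 0.3, (20, 2)),
                  rng.normal(5.0, 0.3, (20, 2))])
q, t = random_isometry(2, rng)
masked = data @ q + t          # what the servers would actually see

labels_plain = kmeans_labels(data, 2, [0, 20])
labels_masked = kmeans_labels(masked, 2, [0, 20])
# Distances are invariant under the isometry, so the clusterings agree.
assert (labels_plain == labels_masked).all()
```

    The owner keeps (q, t) secret, sends only the masked points to the servers, and receives labels that apply unchanged to the original data.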

    Making GDPR Usable: A Model to Support Usability Evaluations of Privacy

    We introduce a new model for evaluating privacy that builds on the criteria proposed by the EuroPriSe certification scheme by adding usability criteria. Our model is visually represented as a cube, called the Usable Privacy Cube (or UP Cube), whose three axes of variability capture, respectively: the rights of the data subjects, privacy principles, and usable privacy criteria. We slightly reorganize the criteria of EuroPriSe to fit the UP Cube model, i.e., we show how EuroPriSe can be viewed as a combination of only rights and principles, forming the two axes at the base of our UP Cube. In this way we also bring out two perspectives on privacy: that of the data subjects and that of the controllers/processors. We define usable privacy criteria based on usability goals that we have extracted from the full text of the General Data Protection Regulation. The criteria are designed to produce measurements of the level of usability with which the goals are reached. Specifically, we measure effectiveness, efficiency, and satisfaction, considering both objective and perceived usability outcomes, producing measures of accuracy and completeness, of resource utilization (e.g., time, effort, financial), and measures resulting from satisfaction scales. In the long run, the UP Cube is meant to be the model behind a new certification methodology capable of evaluating the usability of privacy, to the benefit of common users. For industries, considering the usability of privacy as well would allow for greater business differentiation, beyond GDPR compliance.

    Comment: 41 pages, 2 figures, 1 table, and appendixe
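    As an illustration of the kind of measurement described, one point along the usable-privacy axis might combine effectiveness, efficiency, and satisfaction scores. The field names and formulas below are hypothetical stand-ins, not EuroPriSe's or the paper's own criteria:

```python
from dataclasses import dataclass

@dataclass
class UsabilityMeasurement:
    # One hypothetical measurement of a usable-privacy goal.
    tasks_completed: int        # e.g. privacy settings configured correctly
    tasks_total: int
    time_spent_s: float         # resource utilisation
    time_budget_s: float
    satisfaction_scores: list   # e.g. 1-5 Likert responses

    def effectiveness(self):    # accuracy / completeness
        return self.tasks_completed / self.tasks_total

    def efficiency(self):       # fraction of the time budget left unused
        return max(0.0, 1.0 - self.time_spent_s / self.time_budget_s)

    def satisfaction(self):     # mean of a 1-5 scale, normalised to [0, 1]
        mean = sum(self.satisfaction_scores) / len(self.satisfaction_scores)
        return (mean - 1) / 4

m = UsabilityMeasurement(8, 10, 300.0, 600.0, [4, 5, 3, 4])
# effectiveness 0.8, efficiency 0.5, satisfaction 0.75
```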

    Computational fact checking from knowledge networks

    Traditional fact checking by expert journalists cannot keep up with the enormous volume of information now generated online. Computational fact checking may significantly enhance our ability to evaluate the veracity of dubious information. Here we show that the complexities of human fact checking can be approximated quite well by finding the shortest path between concept nodes under properly defined semantic proximity metrics on knowledge graphs. Framed as a network problem, this approach is feasible with efficient computational techniques. We evaluate it by examining tens of thousands of claims related to history, entertainment, geography, and biographical information, using a public knowledge graph extracted from Wikipedia. Statements independently known to be true consistently receive higher support from our method than false ones do. These findings represent a significant step toward scalable computational fact-checking methods that may one day mitigate the spread of harmful misinformation.
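    The core idea can be sketched as a cheapest-path search in which passing through a high-degree (generic) node is penalised, so that specific, closely related concepts yield higher proximity. A toy stdlib-only sketch, where the graph and weighting are illustrative rather than the paper's exact metric:

```python
import heapq
import math
from collections import defaultdict

# Toy undirected knowledge graph; the real study used a graph extracted
# from Wikipedia. Node and edge choices here are illustrative.
edges = [
    ("barack_obama", "politician"), ("politician", "usa"),
    ("barack_obama", "hawaii"), ("hawaii", "usa"),
    ("barack_obama", "democratic_party"), ("democratic_party", "usa"),
    ("usa", "kenya"),
]
graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def semantic_proximity(src, dst):
    # Dijkstra over path costs where each intermediate node v adds
    # log(degree(v)): hubs like "usa" dilute a path's evidential value.
    # Proximity = 1 / (1 + cost), so a direct edge scores 1.0.
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        cost, u = heapq.heappop(heap)
        if u == dst:
            return 1.0 / (1.0 + cost)
        if cost > dist.get(u, float("inf")):
            continue
        for v in graph[u]:
            step = 0.0 if v == dst else math.log(len(graph[v]))
            nd = cost + step
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return 0.0

true_claim = semantic_proximity("barack_obama", "hawaii")   # direct link
false_claim = semantic_proximity("barack_obama", "kenya")   # via hubs
assert true_claim > false_claim
```

    The directly linked (true) statement receives strictly higher support than the one reachable only through generic hub nodes, mirroring the paper's finding.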

    Usability and security of out-of-band channels in secure device pairing protocols.

    Initiating and bootstrapping secure, yet low-cost, ad-hoc transactions is an important challenge that must be overcome if the promise of mobile and pervasive computing is to be fulfilled. For example, mobile payment applications would benefit from the ability to pair devices securely without resorting to conventional mechanisms such as shared secrets, a Public Key Infrastructure (PKI), or trusted third parties. A number of methods have been proposed for doing this based on a secondary out-of-band (OOB) channel that either authenticates information passed over the normal communication channel or otherwise establishes an authenticated shared secret for subsequent secure communication. The success of these methods depends largely on the performance and effectiveness of the OOB channel, which usually requires people to perform certain critical tasks correctly. In this paper, we present the results of a comparative usability study of methods that use humans to implement the OOB channel, and argue that most of these proposals fail to take into account factors that may seriously harm the security and usability of a protocol. Our work builds on previous research into the usability of pairing methods and the accompanying recommendations for designing user interfaces that minimise human mistakes. Our findings show that the traditional methods of comparing and typing short strings into mobile devices remain preferable despite claims that newer methods are more usable and secure, and that user interface design alone is not sufficient to mitigate human mistakes in OOB channels.
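    One family of OOB methods mentioned here, comparing short strings across devices, can be sketched as deriving a short authentication string from a hash of the protocol transcript. This is an illustrative construction, not a protocol from the study:

```python
import hashlib

# 16 phonetic words, so each word encodes 4 bits of the digest.
WORDS = ["alpha", "bravo", "charlie", "delta", "echo", "foxtrot",
         "golf", "hotel", "india", "juliet", "kilo", "lima",
         "mike", "november", "oscar", "papa"]

def short_auth_string(transcript: bytes, n_words: int = 4) -> str:
    # Both devices hash the exchanged protocol messages and display the
    # resulting word sequence; the human compares the two displays.
    digest = hashlib.sha256(b"SAS|" + transcript).digest()
    return " ".join(WORDS[byte & 0x0F] for byte in digest[:n_words])

# Same transcript on both devices -> identical strings for the user
# to confirm; a tampered transcript yields a different string with
# overwhelming probability, which the user should spot and reject.
a = short_auth_string(b"pubA|pubB|nonce")
b = short_auth_string(b"pubA|pubB|nonce")
tampered = short_auth_string(b"pubM|pubB|nonce")
assert a == b
```

    The study's point is that the security of such schemes hinges on the human actually performing this comparison correctly, not only on the cryptography.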

    Two heads are better than one: security and usability of device associations in group scenarios.

    We analyse and evaluate the usability and security of the process of bootstrapping security among devices in group scenarios. While a lot of work has been done on single-user scenarios, we are not aware of any that focuses on group situations. Unlike in single-user scenarios, bootstrapping security in a group requires the coordination, attention, and cooperation of all group members. In this paper, we provide an analysis of the security and usability of bootstrapping security in group scenarios and present the results of a usability study on these scenarios. We also highlight crucial factors necessary for designing secure group interactions. © 2010 ACM

    Outlook: Related Fields


    Diagrammatically formalising constraints of a privacy ontology

    © Springer International Publishing AG, part of Springer Nature 2018. Data privacy is a cross-cutting concern for many software projects. We reify a philosophically inspired model of data privacy into a concept diagram. From the concept diagram we extract the privacy constraints and demonstrate one mechanism for translating the constraints into executable software.
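    A minimal illustration of what "translating constraints into executable software" can look like: one privacy constraint (personal data is processed only for purposes the data subject consented to) reified as a runtime check. The class and constraint names are hypothetical, not taken from the paper's ontology:

```python
from dataclasses import dataclass

@dataclass
class DataSubject:
    # The set of processing purposes this person has consented to.
    consented_purposes: set

def may_process(subject: DataSubject, purpose: str) -> bool:
    # Executable form of the constraint: processing is permitted
    # only when the purpose appears in the subject's consent set.
    return purpose in subject.consented_purposes

alice = DataSubject({"billing"})
assert may_process(alice, "billing")
assert not may_process(alice, "marketing")
```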

    Iconified representations of privacy policies: A GDPR perspective


    Privacy Policy Annotation for Semi-automated Analysis: A Cost-Effective Approach

    Privacy policies go largely unread: they are not standardized, are often written in jargon, and are frequently long. Several attempts have been made to simplify them and improve readability, with varying degrees of success. This paper examines keyword extraction, comparing human extraction to natural-language algorithms as a first step in building a taxonomy for creating an ontology (a key tool for improving the access and usability of privacy policies). We present two alternatives to using costly domain experts for keyword extraction: first, trained participants (non-domain experts) read online privacy policies and extracted keywords; second, supervised and unsupervised learning algorithms extracted keywords. Results over a large corpus of 631 policies show that supervised learning algorithms outperform unsupervised ones, and that trained participants outperform the algorithms, but at a much higher cost.
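    As a concrete example of the unsupervised side of such a comparison, a TF-IDF baseline ranks a policy's terms against the rest of a corpus. This is a stdlib-only sketch with a toy corpus, not one of the algorithms evaluated in the paper:

```python
import math
import re
from collections import Counter

def tfidf_keywords(doc, corpus, top_n=3):
    # Rank words in `doc` by term frequency weighted by inverse
    # document frequency across `corpus` (smoothed IDF).
    tokenize = lambda text: re.findall(r"[a-z]+", text.lower())
    docs = [tokenize(d) for d in corpus]
    df = Counter(w for d in docs for w in set(d))
    words = tokenize(doc)
    tf = Counter(words)
    n = len(corpus)
    score = {w: (tf[w] / len(words)) * math.log((1 + n) / (1 + df[w]))
             for w in tf}
    return [w for w, _ in sorted(score.items(), key=lambda kv: -kv[1])[:top_n]]

corpus = [
    "we collect your location data and share location with partners",
    "cookies are stored on your device and cookies track sessions",
    "we retain personal data for billing and billing disputes",
]
keywords = tfidf_keywords(corpus[0], corpus)
# The location-sharing policy surfaces "location" as its top keyword.
```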

    A Middleware Enforcing Location Privacy in Mobile Platform

    Emerging indoor positioning and WiFi infrastructure enable apps offering numerous location-based services (LBS), which pose critical threats to smartphone users' location privacy, enabling continuous tracking, profiling, and unauthorized identification. Currently, the app ecosystem relies on permission-based access control, which has proven ineffective at controlling how third-party apps and library developers use and share users' data. In this paper we present the design, deployment, and evaluation of PL-Protector, a location-privacy-enhancing middleware that uses a caching technique to minimise the interaction with, and data collection from, wireless access points, content distributors, and location providers. PL-Protector also provides a new series of control settings and privacy rules over both the information and control flows between sources and sinks, to prevent disclosure of user information during LBS queries. We implement PL-Protector on Android 6 and conduct experiments with real apps from five categories of location-based services, such as instant messaging and navigation. Experiments demonstrate acceptable delay overheads (under 22 milliseconds), within practical limits; hence, our middleware is practical, secure, and efficient for location-demanding apps.
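    The caching idea can be sketched as a middleware layer that answers repeated location queries locally, so the real location provider sees fewer fresh position fixes. This is a simplified, hypothetical API, not PL-Protector's actual implementation:

```python
import time

class LocationCache:
    # Middleware between apps and the location provider: within a
    # time-to-live window, repeated queries are served from a cached
    # fix instead of contacting the provider again.
    def __init__(self, fetch, ttl_s=60.0, clock=time.monotonic):
        self.fetch = fetch        # callable hitting the real provider
        self.ttl_s = ttl_s
        self.clock = clock
        self._entry = None        # (timestamp, position)
        self.provider_hits = 0    # how often the provider was contacted

    def position(self):
        now = self.clock()
        if self._entry is None or now - self._entry[0] > self.ttl_s:
            self.provider_hits += 1
            self._entry = (now, self.fetch())
        return self._entry[1]

# Three app queries inside the TTL trigger only one provider fix,
# reducing what the provider can observe about the user's movements.
cache = LocationCache(fetch=lambda: (59.91, 10.75), ttl_s=60.0)
positions = [cache.position() for _ in range(3)]
assert cache.provider_hits == 1
```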